-
The recent public releases of AI tools such as ChatGPT have forced computer science educators to reconsider how they teach. These tools have demonstrated considerable ability to generate code and answer conceptual questions, rendering them incredibly useful for completing CS coursework. While overreliance on AI tools could hinder students’ learning, we believe they have the potential to be a helpful resource for both students and instructors alike. We propose a novel system for instructor-mediated GPT interaction in a class discussion board. By automatically generating draft responses to student forum posts, GPT can help Teaching Assistants (TAs) respond to student questions in a more timely manner, giving students an avenue to receive fast, quality feedback on their solutions without turning to ChatGPT directly. Additionally, since they are involved in the process, instructors can ensure that the information students receive is accurate, and can provide students with incremental hints that encourage them to engage critically with the material, rather than just copying an AI-generated snippet of code. We utilize Piazza—a popular educational forum where TAs help students via text exchanges—as a venue for GPT-assisted TA responses to student questions. These student questions are sent to GPT-4 alongside assignment instructions and a customizable prompt, both of which are stored in editable instructor-only Piazza posts. We demonstrate an initial implementation of this system, and provide examples of student questions that highlight its benefits.
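To make the pipeline concrete, below is a minimal Java sketch of how a student question, the assignment instructions, and the instructor's customizable prompt might be combined into a single GPT-4 request whose reply becomes a draft TA response. The string values and the JSON construction are illustrative assumptions; the actual system reads the prompt and instructions from instructor-only Piazza posts and routes the draft through a TA before anything is posted.

// Hedged sketch: drafting a TA response by sending a student question,
// the assignment instructions, and an instructor-written prompt to GPT-4.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DraftTaResponse {
    public static void main(String[] args) throws Exception {
        String instructorPrompt = "Give an incremental hint; do not reveal full solutions.";
        String assignmentSpec  = "Implement a thread-safe bounded buffer ...";
        String studentQuestion = "My consumer thread never wakes up after wait(). Why?";

        // A real implementation should build this JSON with a library and
        // escape user text; string formatting is used only to keep the sketch short.
        String body = """
            {"model": "gpt-4",
             "messages": [
               {"role": "system", "content": "%s\\nAssignment: %s"},
               {"role": "user", "content": "%s"}]}
            """.formatted(instructorPrompt, assignmentSpec, studentQuestion);

        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("https://api.openai.com/v1/chat/completions"))
            .header("Authorization", "Bearer " + System.getenv("OPENAI_API_KEY"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        // The returned draft would be shown to a TA for editing before it is posted to Piazza.
        HttpResponse<String> response =
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}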
-
The success of GPT with coding tasks has made it important to consider the impact of GPT and similar models on teaching programming. Students’ use of GPT to solve programming problems can hinder their learning. However, they might also get significant benefits such as quality feedback on programming style, explanations of how a given piece of code works, help with debugging code, and the ability to see valuable alternatives to their code solutions. We propose a new design for interacting with GPT called Mediated GPT with the goals of (a) providing students with access to GPT but allowing instructors to programmatically modify responses to prevent hindrances to student learning and combat common GPT response concerns, (b) helping students generate and learn to create effective prompts to GPT, and (c) tracking how students use GPT to get help on programming exercises. We demonstrate a first-pass implementation of this design called NotebookGPT.
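A minimal sketch of the mediation idea follows, under stated assumptions: the raw model output passes through instructor-configured filters before the student sees it, and each exchange is logged so student use of GPT can be tracked. The class and method names are illustrative and are not the NotebookGPT API.

// Hedged sketch of mediation: instructor-defined filters rewrite the raw
// model output, and every exchange is logged for later analysis.
import java.util.List;
import java.util.function.UnaryOperator;

public class MediatedGpt {
    private final List<UnaryOperator<String>> filters;
    private final List<String> log = new java.util.ArrayList<>();

    public MediatedGpt(List<UnaryOperator<String>> filters) {
        this.filters = filters;
    }

    public String ask(String studentPrompt, UnaryOperator<String> model) {
        String raw = model.apply(studentPrompt);      // call to GPT (stubbed here)
        String mediated = raw;
        for (UnaryOperator<String> f : filters) {     // instructor-defined rewrites
            mediated = f.apply(mediated);
        }
        log.add(studentPrompt + " -> " + mediated);   // track how students use GPT
        return mediated;
    }

    public static void main(String[] args) {
        // Example filter: replace fenced code blocks with a hint so students
        // are not handed a complete solution.
        UnaryOperator<String> noFullSolutions =
            s -> s.replaceAll("(?s)```.*?```", "[code withheld -- try writing this step yourself]");

        MediatedGpt gpt = new MediatedGpt(List.of(noFullSolutions));
        String reply = gpt.ask("Why does my loop never terminate?",
                               prompt -> "Check your loop condition.\n```java\nwhile(true){}\n```");
        System.out.println(reply);
    }
}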
-
Introducing concurrent execution, forking, joining, synchronization, and load balancing of Java threads to trainees allows them to (a) create arbitrary concurrent algorithms, and (b) be exposed to the underpinnings of concurrency concepts. However, it requires the sacrifice of some existing concepts in the course in which such training is added. To keep this sacrifice low, we ambitiously explored whether such concepts can be effectively introduced and tested in a single class period, which is approximately an hour, without a live lecture. Students were asked to learn the concurrency concepts by reading, running, fixing, and testing an existing concurrent program, and taking a quiz. They had varying knowledge of concurrency and Java threads but had not implemented concurrent Java programs. Both in-person and remote help were offered. They were allowed to finish their work after class, within a week. The vast majority of them who started on time finished the coding correctly and gave satisfactory quiz answers in ninety minutes. This experience suggests that such hands-on training can be usefully added to courses for students and instructors that otherwise provide no training in concurrency, or only training in declarative concepts. Our key ideas can be applied to languages other than Java.
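For readers unfamiliar with the concepts being introduced, the following small Java program illustrates the fork, synchronize, and join steps the module covers; it is an illustration only, not the worked example given to students.

// A minimal example of forking threads, synchronizing on shared state, and joining.
public class ForkJoinSum {
    private static long total = 0;                       // shared state
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1000];
        for (int i = 0; i < data.length; i++) data[i] = i + 1;

        int nThreads = 4;
        Thread[] workers = new Thread[nThreads];
        int chunk = data.length / nThreads;

        for (int t = 0; t < nThreads; t++) {             // fork phase
            final int lo = t * chunk;
            final int hi = (t == nThreads - 1) ? data.length : lo + chunk;
            workers[t] = new Thread(() -> {
                long partial = 0;
                for (int i = lo; i < hi; i++) partial += data[i];
                synchronized (lock) {                    // synchronized update of shared total
                    total += partial;
                }
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();               // join phase
        System.out.println("Sum = " + total);            // expected 500500
    }
}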
-
Today, AI tools are generally considered education disrupters. In this paper, we put them in context with more traditional tools, showing how the two complement each other's pedagogical potential. We motivate a set of specific novel ways in which state-of-the-art tools, individually and together, can influence the teaching of concurrency. The pedagogy tasks we consider are illustrating concepts, creating motivating and debuggable assignments, assessing the runtime behavior and source code of solutions manually and automatically, generating model solutions for code and essay questions, discussing conceptual questions in class, and being aware of in-progress work. We use examples from past courses and training sessions in which we have been involved to illustrate the potential and actual influence of tools on these tasks. Some of the tools we consider are popular tools such as interactive programming environments and chat tools - we show novel uses of them. Others, such as testing and visualization tools, are in-use novel tools - we discuss how they have been used. The final group consists of AI tools such as ChatGPT 3.5 and 4.0 - we discuss their potential and how they can be integrated with traditional tools to realize this potential. We also show that version 4.0 has a better understanding of advanced concepts in synchronization and coordination than version 3.5, and both have a remarkable ability to understand concepts in concurrency, which can be expected to grow with advances in AI.
-
We have developed a software infrastructure for testing multi-threaded programs that implement the fork-join concurrency model. The infrastructure employs several key ideas: The student solutions use print statements to trace the execution of the fork-join phases. The test writer provides a high-level specification of the problem-specific aspects of the traces, which is used by the infrastructure to handle the problem-independent and low-level details of processing the traces. During performance testing, trace output is disabled automatically. During functionality testing, fine-grained feedback is provided to identify the correct and incorrect implementation of the various fork-join phases. Tests written using our infrastructure have been used in an instructor-training workshop as an instructor agent clarifying requirements and checking in-progress work. The size of the code to check the concurrency correctness of final and intermediate results was far smaller than that of the code to check the serial correctness of such results.
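The sketch below illustrates the division of labor under stated assumptions: the regular expressions stand in for the problem-specific trace specification a test writer would supply, and the matching loop stands in for the infrastructure's problem-independent trace processing and feedback. None of the names correspond to the actual infrastructure API.

// Hedged sketch: checking student trace output against a high-level
// specification of the expected fork-join phases.
import java.util.List;
import java.util.regex.Pattern;

public class TraceChecker {
    // Problem-specific specification: one pattern per expected phase, in order.
    static final List<Pattern> PHASES = List.of(
        Pattern.compile("FORK worker \\d+"),
        Pattern.compile("JOIN worker \\d+"),
        Pattern.compile("RESULT \\d+"));

    // Problem-independent processing: each phase pattern must match some later
    // trace line, yielding fine-grained feedback on which phase is missing.
    public static void check(List<String> traceLines) {
        int pos = 0;
        for (Pattern phase : PHASES) {
            boolean found = false;
            while (pos < traceLines.size() && !found) {
                found = phase.matcher(traceLines.get(pos++)).find();
            }
            System.out.println((found ? "PASS " : "FAIL ") + phase.pattern());
        }
    }

    public static void main(String[] args) {
        check(List.of("FORK worker 1", "FORK worker 2",
                      "JOIN worker 1", "JOIN worker 2", "RESULT 500500"));
    }
}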
-
As interest in programming as a major grows, instructors must accommodate more students in their programming courses. One particularly challenging aspect of this growth is providing quality assistance to students during in-class and out-of-class programming exercises. Prior work proposes using instructor dashboards to help instructors combat these challenges. Further, the introduction of ChatGPT represents an exciting avenue to assist instructors with programming exercises but needs a delivery method for this assistance. We propose a revision of a current instructor dashboard, Assistant Dashboard Plus, that extends the existing dashboard with two new features: (a) identifying students in difficulty so that instructors can effectively assist them, and (b) providing instructors with pedagogically relevant groupings of students’ exercise solutions with similar implementations so that instructors can provide overlapping code style feedback to students within the same group. For difficulty detection, it uses a state-of-the-art algorithm for which a visualization has not been created. For code clustering, it uses GPT. We present a first-pass implementation of this dashboard.
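The following Java sketch illustrates the clustering step under stated assumptions: several student solutions are packed into one prompt that asks the model to group similar implementations so the instructor can give one piece of style feedback per group. The prompt wording and the stubbed model call are illustrative, not the dashboard's actual code.

// Hedged sketch: building a clustering prompt from student solutions.
import java.util.List;
import java.util.function.UnaryOperator;

public class SolutionClusterer {
    public static String buildClusteringPrompt(List<String> solutions) {
        StringBuilder prompt = new StringBuilder(
            "Group the following student solutions by implementation approach. " +
            "For each group, list its member numbers and describe the shared approach.\n");
        for (int i = 0; i < solutions.size(); i++) {
            prompt.append("\nSolution ").append(i + 1).append(":\n")
                  .append(solutions.get(i)).append("\n");
        }
        return prompt.toString();
    }

    public static void main(String[] args) {
        List<String> solutions = List.of(
            "for (int i = 0; i < n; i++) sum += a[i];",
            "int i = 0; while (i < n) { sum += a[i]; i++; }",
            "sum = java.util.Arrays.stream(a).sum();");

        // The model call is stubbed; the dashboard would send the prompt to GPT.
        UnaryOperator<String> model = p -> "Group 1: 1, 2 (explicit loops); Group 2: 3 (streams)";
        System.out.println(model.apply(buildClusteringPrompt(solutions)));
    }
}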
-
As part of a 3-day workshop on training faculty members in concurrency, we developed a module for hands-on training in Java Fork-Join abstractions that had several related novel pedagogical and technical components: (1) Source and runtime checks that (a) tested whether test-aware code created by the trainees met the expected requirements and (b) logged their results in the local file system and the IBM cloud. (2) Editable worked example code along with a guide on how to understand the underlying concepts behind the code and experiment with the code. (3) The ability to follow the guide (a) synchronously, with graduate student help, in a session devoted to this module, and (b) asynchronously, on one's own, before or after the synchronous session. (4) Assignments trainees could do after experimenting with the worked example. (5) Zoom recording of the entire synchronous session. Fourteen faculty members across the country attended the session and had varying amounts of knowledge of Java and automatic assessment. Data gathered from check logs and a Zoom recording, together with novel visualizations of them, provide information to evaluate our pedagogical model and differentiate the participants.
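As a rough illustration of component (1), the sketch below shows a runtime check that appends its result to a local log file; the additional upload to the IBM cloud would be a separate HTTP request and is omitted. All names are hypothetical, not the module's actual check code.

// Hedged sketch: a runtime check that logs its result to the local file system.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

public class CheckLogger {
    public static void logCheck(String checkName, boolean passed) throws IOException {
        String line = Instant.now() + " " + checkName + " " + (passed ? "PASS" : "FAIL") + "\n";
        Files.writeString(Path.of("check-log.txt"), line,
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
    }

    public static void main(String[] args) throws IOException {
        // A stand-in check; the module's real checks inspect the trainee's fork-join code.
        boolean sampleCheckPassed = Runtime.getRuntime().availableProcessors() > 1;
        logCheck("machine-supports-parallelism", sampleCheckPassed);
    }
}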
-
During the Covid pandemic, we gave a Java assignment that exercised threads, synchronization, and coordination and wrote tests to check each concurrency aspect of the assignment. We used four different technologies to record events related to work on this assignment: the Piazza discussion forum, the Zoom conferencing system, an Eclipse plugin, and a testing framework. The recorded data have given the instructors of the course broad awareness of several aspects of student work: How much time did a student spend on an assignment? How many attempts did students make on thread, synchronization, and coordination tests before they reached their final scores? How many times did they go to Piazza or use Zoom-supported office-hour visits to fix concurrency problems, and what was the nature of these problems? How effective was Zoom transcription for classifying the office-hour problems? How long and effective were the office-hour visits, and to what extent was screen sharing used during these visits? To what extent did students use the tests to determine if they had met assignment requirements? These data, in turn, have provided us with preliminary answers to a variety of questions we had about unseen work and the concurrency aspects of the assignment. While the answers may be specific to our assignment, the questions answered by these mechanisms can be expected to apply to other settings.
-
Existing techniques for automating the testing of sequential programming assignments are fundamentally at odds with concurrent programming as they are oblivious to the algorithm used to implement the assignments. We have developed a framework that addresses this limitation for those object-based concurrent assignments whose user-interface (a) is implemented using the observer pattern and (b) makes apparent whether concurrency requirements are met. It has two components. The first component reduces the number of steps a human grader needs to take to interact with and score the user-interfaces of the submitted programs. The second component completely automates assessment by observing the events sent by the student-implemented observable objects. Both components are used to score the final submission and log interactions. The second component is also used to provide feedback during assignment implementation. Our experience shows that the framework is used extensively by students, leads to more partial credit, reduces grading time, and gives statistics about incremental student progress.
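The sketch below illustrates the idea behind the second component under stated assumptions: the grader registers as an observer of a student-implemented observable object and awards partial credit based on the events it receives, without inspecting the student's code. The interfaces are illustrative, not the framework's actual API.

// Hedged sketch: grading by observing events from a student-implemented observable.
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ObserverGrader {
    // Stand-in for a student-implemented observable object.
    static class StudentBuffer {
        private final List<Consumer<String>> observers = new ArrayList<>();
        public void addObserver(Consumer<String> o) { observers.add(o); }
        public void put(String item) {
            for (Consumer<String> o : observers) o.accept("PUT " + item);
        }
    }

    public static void main(String[] args) {
        StudentBuffer buffer = new StudentBuffer();
        List<String> observedEvents = new ArrayList<>();

        // The grader observes events instead of inspecting the student's code.
        buffer.addObserver(observedEvents::add);

        buffer.put("x");
        buffer.put("y");

        // Partial credit: one point per required event that was observed.
        int score = 0;
        if (observedEvents.contains("PUT x")) score++;
        if (observedEvents.contains("PUT y")) score++;
        System.out.println("Score: " + score + "/2");
    }
}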